    Bounding the Impact of Unbounded Attacks in Stabilization

    Self-stabilization is a versatile approach to fault-tolerance, since it permits a distributed system to recover from any transient fault that arbitrarily corrupts the contents of all memories in the system. Byzantine tolerance is an attractive feature of distributed systems that makes it possible to cope with arbitrary malicious behaviors. Combining these two properties has proved difficult: it is impossible to contain the spatial impact of Byzantine nodes in a self-stabilizing context for global tasks such as tree orientation and tree construction. We present and illustrate a new concept of Byzantine containment in stabilization. Our property, called Strong Stabilization, makes it possible to contain the impact of Byzantine nodes even if they actually perform too many Byzantine actions. We derive impossibility results for strong stabilization and present strongly stabilizing protocols for tree orientation and tree construction that are optimal with respect to the number of Byzantine nodes that can be tolerated in a self-stabilizing context.
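
    To make the containment issue concrete, the following is a minimal sketch (not the authors' strongly stabilizing protocol) of the classical self-stabilizing BFS spanning-tree rule for tree construction: a single Byzantine node that keeps advertising a false distance can repeatedly drag the parent pointers of arbitrarily distant correct nodes, which is precisely the spatial-impact problem that strong stabilization addresses. The graph encoding and function names below are illustrative assumptions.

```python
# Illustrative (hypothetical) encoding: `state` maps each node id to its local
# variables, `neighbors` is an adjacency list, `root_id` is the designated
# root.  Repeatedly executing this move at every node makes the parent
# pointers converge to a BFS tree from any corrupted initial state, but it
# offers no Byzantine containment.
def bfs_tree_step(node, state, neighbors, root_id):
    """One move of a correct node; returns True if its variables changed."""
    if node == root_id:                        # the root anchors the tree
        new = {"dist": 0, "parent": None}
    else:                                      # adopt the closest-looking neighbor
        best = min(neighbors[node], key=lambda v: state[v]["dist"])
        new = {"dist": state[best]["dist"] + 1, "parent": best}
    changed = state[node] != new
    state[node] = new
    return changed

# Tiny driver on a path 0 - 1 - 2 starting from an arbitrary (corrupted) state.
neighbors = {0: [1], 1: [0, 2], 2: [1]}
state = {0: {"dist": 7, "parent": 2},
         1: {"dist": 0, "parent": None},
         2: {"dist": 3, "parent": 0}}
while any(bfs_tree_step(v, state, neighbors, root_id=0) for v in neighbors):
    pass
print(state)  # distances 0, 1, 2 with parents None, 0, 1
```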

    Fast and compact self-stabilizing verification, computation, and fault detection of an MST

    This paper demonstrates the usefulness of distributed local verification of proofs as a tool for the design of self-stabilizing algorithms. In particular, it introduces a somewhat generalized notion of distributed local proofs and utilizes it to improve the time complexity significantly while maintaining space optimality. As a result, we show that optimizing the memory size carries at most a small cost in terms of time, in the context of the Minimum Spanning Tree (MST) problem. That is, we present algorithms that are both time and space efficient for both constructing an MST and for verifying it. This involves several parts that may be considered contributions in themselves. First, we generalize the notion of local proofs, trading off time complexity for memory efficiency. This adds a dimension to the study of distributed local proofs, which has been gaining attention recently. Specifically, we design a (self-stabilizing) proof labeling scheme which is memory optimal (i.e., O(log n) bits per node) and whose time complexity is O(log² n) in synchronous networks, or O(Δ log³ n) in asynchronous ones, where Δ is the maximum degree of a node. This answers an open problem posed by Awerbuch and Varghese (FOCS 1991). We also show that Ω(log n) time is necessary, even in synchronous networks. Another property is that if f faults occurred, then, within the required detection time above, they are detected by some node in the O(f log n) locality of each of the faults. Second, we show how to enhance a known transformer that makes input/output algorithms self-stabilizing. It now takes as input an efficient construction algorithm and an efficient self-stabilizing proof labeling scheme, and produces an efficient self-stabilizing algorithm. When used for MST, the transformer produces a memory-optimal self-stabilizing algorithm whose time complexity, namely O(n), is significantly better even than that of previous algorithms. (The time complexity of previous MST algorithms that used Ω(log² n) memory bits per node was O(n²), and the time for optimal-space algorithms was O(n|E|).) Inherited from our proof labeling scheme, our self-stabilizing MST construction algorithm also has the following two properties: (1) if faults occur after the construction ended, then they are detected by some nodes within O(log² n) time in synchronous networks, or within O(Δ log³ n) time in asynchronous ones, and (2) if f faults occurred, then, within the required detection time above, they are detected within the O(f log n) locality of each of the faults. We also show how to improve the above two properties, at the expense of some increase in memory.
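
    As a hedged illustration of what distributed local verification with a proof labeling scheme looks like, the sketch below shows the textbook scheme certifying that parent pointers form a spanning tree rooted at a given node, using hop-distance labels. The paper's MST scheme is considerably more involved; all names below are illustrative assumptions.

```python
# Hypothetical names throughout.  Each node stores a label: its claimed hop
# distance to `root`.  Every node runs the same check against its parent only;
# the sketch implicitly assumes parent[v] is one of v's network neighbors.
def verify_locally(node, parent, label, root):
    """Return True iff this node sees no local inconsistency."""
    if node == root:
        return parent[node] is None and label[node] == 0
    p = parent[node]
    return p is not None and label[node] == label[p] + 1

# If every node accepts, labels strictly decrease along parent pointers, so
# there is no cycle and every chain ends at the root: the pointers form a
# spanning tree.  Any fault makes at least one node reject.
nodes = ["r", "a", "b", "c"]
parent = {"r": None, "a": "r", "b": "a", "c": "b"}
label = {"r": 0, "a": 1, "b": 2, "c": 3}
print(all(verify_locally(v, parent, label, "r") for v in nodes))  # True
```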

    Self-Stabilization, Byzantine Containment, and Maximizable Metrics: Necessary Conditions

    Self-stabilization is a versatile approach to fault-tolerance, since it permits a distributed system to recover from any transient fault that arbitrarily corrupts the contents of all memories in the system. Byzantine tolerance is an attractive feature of distributed systems that makes it possible to cope with arbitrary malicious behaviors. We consider the well-known problem of constructing a maximum metric tree in this context. Combining these two properties leads to some impossibility results. In this paper, we provide two necessary conditions for constructing a maximum metric tree in the presence of transient and (permanent) Byzantine faults.
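
    For intuition about maximum metric trees, here is an illustrative sketch (not the paper's construction) of the generic local rule with one concrete maximizable metric, the "widest path" (bottleneck capacity) metric: each non-root node repeatedly attaches to the neighbor giving it the best metric value. The function and variable names are assumptions made for the example.

```python
# Illustrative sketch; `value`, `parent`, `capacity` and the metric are
# assumptions, not the paper's notation.  With the widest-path metric, the
# value of a node is the best bottleneck capacity of a path to the root.
INF = float("inf")

def widest_path(parent_value, edge_capacity):
    # metric value obtained by attaching below a neighbor worth parent_value
    return min(parent_value, edge_capacity)

def metric_tree_step(node, value, parent, capacity, neighbors, root):
    """One move of a correct node; returns True if its state changed."""
    if node == root:
        new_val, new_par = INF, None           # the root has the best possible value
    else:                                      # attach where the metric is maximized
        new_par = max(neighbors[node],
                      key=lambda v: widest_path(value[v], capacity[(node, v)]))
        new_val = widest_path(value[new_par], capacity[(node, new_par)])
    changed = (value[node], parent[node]) != (new_val, new_par)
    value[node], parent[node] = new_val, new_par
    return changed
```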

    Stabilizing Maximal Independent Set in Unidirectional Networks is Hard

    A distributed algorithm is self-stabilizing if, after faults and attacks hit the system and place it in some arbitrary global state, the system recovers from this catastrophic situation without external intervention in finite time. In this paper, we consider the problem of constructing a maximal independent set in a self-stabilizing manner in uniform unidirectional networks of arbitrary shape. On the negative side, we present evidence that in uniform networks, deterministic self-stabilization of this problem is impossible. Also, the silence property (i.e., having communication fixed from some point on in every execution) is impossible to guarantee, either for deterministic or for probabilistic variants of protocols. On the positive side, we present a deterministic protocol for arbitrary unidirectional networks with unique identifiers that exhibits polynomial space and time complexity under asynchronous scheduling. We complement the study with probabilistic protocols for the uniform case: the first requires infinite memory but copes with asynchronous scheduling, while the second has polynomial space complexity but can only handle synchronous scheduling. Both probabilistic solutions have expected polynomial time complexity.
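
    For contrast with the unidirectional setting studied here, the sketch below gives the classical self-stabilizing maximal independent set rules for bidirectional networks with unique identifiers (a textbook construction, not this paper's protocol): a node joins when no neighbor is in the set and retreats when a smaller-identifier neighbor is in it.

```python
# Textbook rules for *bidirectional* networks (illustrative, not the paper's
# unidirectional protocol).  `in_mis` maps node id -> bool, `neighbors` is a
# symmetric adjacency list, and node ids are assumed totally ordered.
def mis_step(node, in_mis, neighbors):
    """One move of a node; returns True if its membership bit changed."""
    dominated = any(in_mis[v] and v < node for v in neighbors[node])
    covered = any(in_mis[v] for v in neighbors[node])
    if in_mis[node] and dominated:
        in_mis[node] = False       # retreat: a smaller-id neighbor is in the set
        return True
    if not in_mis[node] and not covered:
        in_mis[node] = True        # join: no neighbor is currently in the set
        return True
    return False
```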

    Quiescence of Self-stabilizing Gossiping among Mobile Agents in Graphs

    This paper considers gossiping among mobile agents in graphs: agents move on the graph and have to disseminate their initial information to every other agent. We focus on self-stabilizing solutions to the gossip problem, where agents may start from arbitrary locations in arbitrary states. Self-stabilization requires (some of the) participating agents to keep moving forever, which raises the question of maximizing the number of agents that can be allowed to eventually stop moving. This paper formalizes the self-stabilizing agent gossip problem, introduces the quiescence number (i.e., the maximum number of eventually stopping agents) of self-stabilizing solutions, and investigates the quiescence number with respect to several assumptions related to agent anonymity, synchrony, link duplex capacity, and whiteboard capacity.
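
    The toy round-based simulation below is only meant to illustrate whiteboard-mediated gossip among moving agents; it assumes a synchronous ring and ignores the anonymity, asynchrony, duplex, and capacity parameters the paper actually studies. All names are assumptions.

```python
# Toy synchronous model: `whiteboard[u]` is the set of values written at node
# u, `knows[i]` is what agent i has collected so far, and every agent steps
# one node clockwise per round.
def gossip_on_ring(n_nodes, start_positions, rounds):
    whiteboard = [set() for _ in range(n_nodes)]
    knows = [{i} for i in range(len(start_positions))]   # agent i starts knowing {i}
    pos = list(start_positions)
    for _ in range(rounds):
        for i in range(len(pos)):
            whiteboard[pos[i]] |= knows[i]    # write everything the agent knows
            knows[i] |= whiteboard[pos[i]]    # pick up what earlier visitors left
            pos[i] = (pos[i] + 1) % n_nodes   # move to the next node on the ring
    return knows

# Three agents on a five-node ring: all of them end up knowing {0, 1, 2}.
print(gossip_on_ring(5, [0, 2, 4], rounds=10))
```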

    Self-Stabilizing Token Distribution with Constant-Space for Trees

    Self-stabilizing and silent distributed algorithms for token distribution in rooted tree networks are given. Initially, each process holds at most l tokens. Our goal is to distribute the tokens over the whole network so that every process holds exactly k tokens. In the initial configuration, the total number of tokens in the network may not equal nk, where n is the number of processes in the network. The root process is therefore given the ability to create a new token or to remove a token from the network. We aim to minimize the convergence time, the number of token moves, and the space complexity. A self-stabilizing token distribution algorithm is given that converges within O(nl) asynchronous rounds and needs Θ(nhε) redundant (i.e., unnecessary) token moves, where ε = min(k, l-k) and h is the height of the tree network. Two novel ideas to reduce the number of redundant token moves are presented: one reduces the number of redundant token moves to O(nh) without any additional cost, while the other reduces it to O(n) but increases the convergence time to O(nhl). All the algorithms given use only constant space at each process and in each link register.
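
    The following sequential sketch captures the underlying token-flow idea on a rooted tree (it is not the self-stabilizing, constant-space algorithm of the paper): each subtree pushes its surplus toward, or pulls its deficit from, its parent, and the root creates or removes whatever remains so that every process ends with exactly k tokens. The data layout and names are assumptions made for the example.

```python
# Sequential sketch with illustrative names: `children` maps each node to its
# children in the rooted tree, `tokens` to its initial token count, and the
# target is k tokens per process.  A negative surplus means tokens flow down
# from the parent instead of up.
def balance_tokens(children, tokens, root, k):
    moves = 0

    def settle(v):
        nonlocal moves
        for c in children.get(v, []):
            surplus = settle(c)        # tokens the subtree of c cannot absorb
            tokens[c] -= surplus       # move them across the edge (c, parent)
            tokens[v] += surplus
            moves += abs(surplus)
        return tokens[v] - k           # surplus (> 0) or deficit (< 0) left at v

    leftover = settle(root)
    tokens[root] -= leftover           # the root creates or removes the remainder
    return moves

# Example: a three-process path r - a - b with k = 2; four token moves suffice.
tokens = {"r": 5, "a": 0, "b": 1}
children = {"r": ["a"], "a": ["b"]}
print(balance_tokens(children, tokens, "r", k=2), tokens)
```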